Privacy and AI: Navigating compliance with GDPR and the AI Act

PwC | 1:00 pm, 10th March


In the rapidly evolving landscape of Artificial Intelligence (AI), privacy has become a paramount concern. As AI technologies become more integrated into our daily lives, ensuring that these systems comply with privacy regulations is crucial. For businesses and developers in Luxembourg, understanding and adhering to the General Data Protection Regulation (GDPR) and the AI Act is essential. This article explores the key points of attention for privacy in AI and provides guidance on maintaining compliance with these regulations.


Ensuring data minimisation and purpose limitation

One of the primary considerations under GDPR is data minimisation and purpose limitation. Collect only the data necessary for the specific purpose of the AI application; avoiding excessive data collection reduces both privacy risks and regulatory scrutiny. Addressing potential biases in the data is equally important to ensure fair and non-discriminatory AI outcomes. The purpose of data collection should be clearly defined, and data should be used solely for that purpose. Any secondary use must be compatible with the original purpose and communicated to the data subjects; where it is not compatible, consent might be required or a new lawful basis must be identified under GDPR. On the AI side, selecting AI systems that have respected the data-collection rules (the GPAI provisions of the AI Act) and having proper data governance and access controls in place (not least for cloud deployments) is a must.
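As a minimal sketch of what purpose limitation can look like in practice, the snippet below filters incoming records against a hypothetical purpose register, so that fields not needed for the declared purpose never enter the AI pipeline. The register, purposes, and field names are illustrative assumptions, not prescribed by GDPR:

```python
# Minimal sketch of purpose-bound data minimisation.
# The purpose register and field names below are hypothetical examples.

ALLOWED_FIELDS = {
    "credit_scoring": {"income", "outstanding_debt", "payment_history"},
    "churn_prediction": {"tenure_months", "product_usage", "support_tickets"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields registered for the declared processing purpose."""
    try:
        allowed = ALLOWED_FIELDS[purpose]
    except KeyError:
        raise ValueError(f"No registered purpose: {purpose!r}")
    return {k: v for k, v in record.items() if k in allowed}

raw = {"name": "Alice", "income": 52000, "payment_history": "on-time"}
print(minimise(raw, "credit_scoring"))
# {'income': 52000, 'payment_history': 'on-time'} -- the excess field
# ("name") is dropped before the data reaches the AI application.
```

The design point is that the allow-list is declared per purpose up front, so a secondary use with a different purpose cannot silently reuse fields collected for the original one.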


Being transparent 

Transparency and explainability are also critical. Data subjects must be informed about how their data is used by the AI application, the logic behind AI decisions, and the potential impact on their privacy. This includes providing clear and accessible privacy notices, including information about whether the user is interacting with an AI system or a human. Ensuring that AI systems are explainable, meaning that their decision-making processes can be understood by humans, is particularly important for high-stakes decisions, such as those affecting employment, credit, or healthcare.


Respecting data subject rights

Respecting data subject rights is another key aspect. Data subjects must be able to access their data and request corrections if it is inaccurate or incomplete. Mechanisms must also be provided for data subjects to request the deletion of their data or restrict its processing under certain conditions, even where technological limitations apply. Additionally, enabling data subjects to obtain and reuse their data across different services (data portability) supports their rights.
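To make these rights concrete, the sketch below shows one hypothetical shape such handlers could take in application code, with an in-memory store standing in for a real database. The function names and store layout are assumptions for illustration only:

```python
import json

# Illustrative store keyed by data-subject ID; a real system would
# sit on top of a database and an authenticated request workflow.
_STORE = {"ds-001": {"email": "alice@example.com", "newsletter": True}}

def access(subject_id: str) -> dict:
    """Right of access: return a copy of everything held about the subject."""
    return dict(_STORE.get(subject_id, {}))

def rectify(subject_id: str, field: str, value) -> None:
    """Right to rectification: correct an inaccurate or incomplete field."""
    _STORE.setdefault(subject_id, {})[field] = value

def erase(subject_id: str) -> bool:
    """Right to erasure: delete the subject's record, technology permitting."""
    return _STORE.pop(subject_id, None) is not None

def export_portable(subject_id: str) -> str:
    """Right to portability: machine-readable export the subject can reuse."""
    return json.dumps(access(subject_id), indent=2)
```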

Implementing robust security measures to protect personal data from unauthorised access, disclosure, alteration, or destruction is vital. This includes encryption, access controls, and regular security assessments, including thorough cybersecurity penetration tests. AI systems call for additional safeguards on top of these: controls implemented around the model itself to ensure it cannot be jailbroken by malicious prompts.
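As an illustration of one such control, the sketch below screens incoming prompts against a deny-list of known jailbreak phrasings before they reach the model. This is a deliberately simple, assumed example; production guardrails typically layer trained classifiers, output filtering, and system-prompt hardening on top of pattern matching:

```python
import re

# Toy deny-list of common jailbreak phrasings; regex alone is not
# sufficient in production, but it illustrates the control point.
SUSPECT_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"pretend (you are|to be)", re.I),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.I),
]

def screen_prompt(prompt: str) -> str:
    """Reject prompts matching known jailbreak phrasings before model access."""
    for pattern in SUSPECT_PATTERNS:
        if pattern.search(prompt):
            raise PermissionError("Prompt blocked by AI-misuse policy")
    return prompt
```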


Pseudonymisation as a key data protection measure

The new pseudonymisation guidelines, adopted by the European Data Protection Board (EDPB) on 16 January 2025, provide a comprehensive framework for the use of pseudonymisation as a data protection measure. These guidelines clarify that pseudonymised data still constitutes personal data but highlight how pseudonymisation can reduce risks and facilitate the use of legitimate interests as a legal basis for processing, provided all GDPR requirements are met. 

Pseudonymisation can significantly facilitate the use of AI by ensuring that personal data is protected while still allowing for meaningful data analysis. By preventing the direct attribution of data to individuals, pseudonymisation helps mitigate privacy risks and enhances data security. This is particularly important in AI applications where large datasets are often used for training and improving algorithms. The guidelines also emphasise the importance of implementing technical measures and safeguards to ensure the confidentiality and integrity of pseudonymised data.
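As a minimal sketch of one common technique (keyed hashing; the EDPB guidelines do not prescribe a specific method), the snippet below replaces a direct identifier with a deterministic pseudonym. The key values and field names are illustrative assumptions:

```python
import hashlib
import hmac

# The secret key must be stored separately from the pseudonymised
# dataset: whoever holds it can re-link pseudonyms to individuals,
# which is why pseudonymised data remains personal data under GDPR.
SECRET_KEY = b"load-from-a-key-management-system"  # placeholder

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, deterministic pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

row = {"customer_id": "alice@example.com", "basket_value": 42.0}
row["customer_id"] = pseudonymise(row["customer_id"])
# Analysis and model training can still group records by the stable
# pseudonym, but the dataset no longer carries the raw identifier.
```

Because the same identifier always maps to the same pseudonym, datasets can still be joined and aggregated for AI training while the re-linking key is held under separate access controls.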


The AI Act

Organisations mustn't forget about the AI Act in their AI compliance journey. The AI Act will become fully applicable for private entities on 2 August 2026, but some rules apply earlier: for example, the prohibition of specific use cases and the AI literacy obligations have applied since 2 February 2025. Though the AI Act rules may seem stringent, pseudonymising the data might be a way forward, as replacing identifiable information with pseudonyms limits the risks of unauthorised access and data breaches. Limiting these risks through pseudonymisation in turn opens further opportunities to use this data in AI tools, develop your business, and enhance operational efficiencies.

Establishing clear governance structures and accountability mechanisms for AI systems is also important. Under GDPR, this includes appointing Data Protection Officers (DPOs), conducting Data Protection Impact Assessments (DPIAs), and maintaining detailed Records of Processing Activities (RoPA). On the AI side, appointing AI officers or conducting Fundamental Rights Impact Assessments may be required.

To ensure compliance with GDPR and the AI Act, conducting regular DPIAs is essential, as is ensuring human oversight, a key pillar of the AI Act. DPIAs help identify the privacy risks associated with AI systems and implement appropriate mitigation measures, ensuring that data processing activities comply with GDPR requirements and that potential impacts on data subjects are thoroughly assessed. Implementing privacy by design and by default is another crucial step: privacy considerations must be integrated into the design and development of AI systems from the outset, including technical and organisational measures that prioritise data protection and privacy.

Staying informed about regulatory developments is also important. The regulatory landscape for AI is continuously evolving, so staying updated on changes to the AI Act, as well as any new privacy-related guidelines or best practices issued by regulatory authorities, is crucial. The EDPB recently published an opinion on AI models [1], emphasising the importance of GDPR principles in supporting responsible AI. This opinion provides guidance on when and how AI models can be considered anonymous, the use of legitimate interest as a legal basis, and the implications of using personal data processed unlawfully. Engaging with Data Protection Authorities (DPAs) and the entities designated to enforce the AI Act (including the CNPD in Luxembourg), and maintaining open communication with them, can help ensure that AI systems remain compliant and that any potential issues are addressed proactively.


Promoting privacy and AI awareness

Fostering a culture of privacy (and AI) awareness within an organisation is essential. Promoting privacy and AI awareness, providing regular training, and offering resources to employees to ensure they understand their responsibilities and the importance of data protection will help maintain compliance; the same applies to AI systems and processes. This involves identifying training needs, tailoring content to the responsibilities of each role or group, and establishing benchmarks for measurable performance.

As AI continues to advance, maintaining privacy and regulatory compliance is more important than ever. By focusing on key points of attention such as data minimisation, transparency, and security, and by adhering to the principles of GDPR and the AI Act, businesses and developers can ensure that their AI systems respect privacy and build trust with users. Staying informed about regulatory developments and fostering a culture of privacy awareness will be crucial in navigating the complex landscape of AI and privacy. By prioritising these practices, organisations can harness the power of AI while safeguarding the privacy rights of individuals, ultimately contributing to a more ethical and trustworthy AI ecosystem.


[1] https://www.edpb.europa.eu/system/files/2024-12/edpb_opinion_202428_ai-models_en.pdf

Frédéric Vonner, Advisory Partner at PwC Luxembourg

Antonin Jakubse, Privacy Senior Manager at PwC Luxembourg

